1. Prelude

This is the second document of the CI/CD pipeline series. This time we will use our own project from java-spring-test-p2 with slight modifications (I will provide the modified version), and we will separate the pipeline into three parts: CI, CD, and automation. Due to time constraints, the automation part could not be finished; some guidelines are given here, and it will be completed in the next document.

2. Setup

The following must be installed before starting the tutorial. The instructions here are written specifically for macOS users.

  1. Virtualbox (v. 6.1)
  2. Docker Engine/ Desktop
  3. Docker registry (Docker Hub with a public repository; there is currently a problem with private repositories that needs further investigation)
  4. Three private GitHub repositories (one named source, one named GitOps, and one named maven-repo, which needs a README file created)
  5. Eclipse IDE

3. Preparation

3.1 Install and start minikube cluster

Please refer to the previous document for installation of minikube and kubectl.

Optionally, we will use the following configuration for minikube:

  minikube config set disk-size 100000
  minikube config set container-runtime containerd
  minikube config set driver virtualbox
  minikube config set memory 4000

The above config takes effect after a minikube delete followed by a minikube start.

To start minikube:

  minikube start

3.2 Build the application images manually and upload them to Docker Hub

Open Eclipse and import the Maven project from the java_project directory. If this is your first time installing Eclipse, make sure to include lombok.jar in Eclipse. The steps are as follows:

  1. Download lombok.jar
  2. Put lombok.jar in /Applications/Eclipse.app/Contents/Eclipse (The actual path will need to be adjusted according to user setup)
  3. Modify eclipse.ini (in the same folder) to include the following at the bottom of the file:
  -Xbootclasspath/a:lombok.jar
  -javaagent:/Applications/Eclipse.app/Contents/Eclipse/lombok.jar
  4. Restart Eclipse and check Eclipse -> About Eclipse:

In service-discovery/pom.xml, make sure to add the lombok version you see above:

  <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.22</version>  <!-- match your downloaded version -->
  </dependency>

In both pom.xml files, under account-service/ and service-discovery/, change the output image name under build.plugins.plugin.configuration.image.name to match your Docker Hub repository name.
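As a sketch, assuming the images are built with the spring-boot-maven-plugin (which the build-image goal below suggests), the relevant section of each pom.xml might look like this; the repository name is a placeholder:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <image>
          <!-- Replace with your own <docker_username/repository>:<tag> -->
          <name>docker.io/your-username/your-repo:account-service_0.0.7-SNAPSHOT</name>
        </image>
      </configuration>
    </plugin>
  </plugins>
</build>
```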

(Be sure your Docker Desktop/Engine is running)

  • right click account-service/pom.xml -> Run as -> Maven build… and type spring-boot:build-image -DskipTests -Dspring-boot.build-image.skip=false

  • right click service-discovery/pom.xml -> Run as -> Maven build… and type spring-boot:build-image

Now use the docker push command to push the two local images to your online Docker repository (Docker Hub).

  docker push <docker_username/repository>:account-service_0.0.7-SNAPSHOT
  docker push <docker_username/repository>:service-discovery_0.0.1-SNAPSHOT

Finally we need to create a frontend image.

  cd java_project/frontend
  docker build -t <docker_username/repository>:frontend-app .
  docker push <docker_username/repository>:frontend-app

3.3 Change settings.xml

The settings.xml in java_project configures Maven to use a remote server as the artifact repository. It contains access credentials, so be sure to change the username and password to your own GitHub username and GitHub access token. The steps to generate a personal access token are in document 1, section 4.
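As an illustration, the credentials section of a settings.xml typically looks like the following; the server id here is an assumption, and it must match the repository id referenced in the project's distribution configuration:

```xml
<settings>
  <servers>
    <server>
      <!-- The id must match the repository id used by the project -->
      <id>github</id>
      <username>your-github-username</username>
      <password>your-github-personal-access-token</password>
    </server>
  </servers>
</settings>
```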

3.4 Create github access key

In document 1, we created a different deploy key pair for each repository. This time we will create a common key for a user to access the different repositories.

First, the same procedure:

  ssh-keygen -t rsa -b 4096
  # Enter file in which to save the key: common
  # Enter passphrase: [press Enter/return]

Copy the public key:

  pbcopy < common.pub

Follow the steps below:

  1. Go to your GitHub user settings and click SSH and GPG keys

  2. Click New SSH Key

  3. Fill in a title (any name), paste the public key into the Key box, and click Add SSH key.

3.5 Generate secret for docker hub

Type the following to generate an access secret for Docker Hub:

  kubectl create secret docker-registry regsecret --docker-server=https://index.docker.io/v1/ --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

3.6 Fill out Name of Public Profile on Github

During the Tekton pipeline, when we push an artifact to maven-repo, the plugin doing this job requires the Name of your GitHub public profile to be filled out. So go to Settings -> Profile and fill in the Name field.

4. ArgoCD Deployment

Install ArgoCD:

  kubectl create namespace argocd

  kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Wait for all the created pods to be in the ready state:

  kubectl get pods -n argocd -w

Now to access argocd UI:

  kubectl port-forward -n argocd svc/argocd-server 8080:443

You can then access the web UI at https://localhost:8080 with

  username: admin
  password: <kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d>

Under the gitops directory, apply the regsecret.yaml, private-repo.yaml, and application.yaml files. This will configure the Docker Hub secret and set up the SSH connection to the GitOps repository. The application.yaml will pull any artifact from the GitOps repository to spin up the application pods.

  cd gitops

Before that we need to modify several files:

  1. Several deployment files
  Change spec.containers.image in the following files according to yours:
  - dev/account-deployment.yaml
  - dev/discovery-deployment.yaml
  - dev/frontend-deployment.yaml
  2. regsecret.yaml
  kubectl get secret regsecret --output=yaml > regsecret.yaml
  # remove metadata.creationTimestamp, metadata.namespace, metadata.resourceVersion, metadata.uid
  # add metadata.namespace: argocd (yaml convention)
  3. private-repo.yaml
  pbcopy < common (the common private key created in 3.4)
  # paste it in under stringData.sshPrivateKey
  4. application.yaml
  # change spec.source.repoURL to your GitOps ssh url
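For orientation, an ArgoCD repository credential like private-repo.yaml is conventionally a Secret labeled with argocd.argoproj.io/secret-type: repository; the name and URL below are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    # This label tells ArgoCD to treat the Secret as a repository credential
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:<your-username>/GitOps.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    (paste the contents of the common file here)
    -----END OPENSSH PRIVATE KEY-----
```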

Now, first push the local gitops directory to your online GitOps repository.

Then you can apply these yaml files:

  kubectl apply -f regsecret.yaml
  kubectl apply -f private-repo.yaml
  kubectl apply -f application.yaml

After several minutes on the first run, you will see the following, which shows success:

To access the website service, we can type the following to see which services are exposed via NodePort:

  kubectl get svc -n myapp

We will see that account-service (backend), frontend-service (frontend), and discovery-service (service discovery) are accessible. Then we can get the URL by typing:

  minikube service frontend-service -n myapp --url

5. Tekton Pipeline

To install the core component of Tekton, Tekton pipeline:

  kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

We will set the current namespace to tekton-pipelines by running:

  kubectl config set-context --current --namespace=tekton-pipelines

To run a CI/CD workflow, we need to provide Tekton with a PersistentVolume for storage purposes.

  kubectl apply -f tekton/pv_1.yaml

To install and run dashboard:

  kubectl apply --filename https://github.com/tektoncd/dashboard/releases/latest/download/tekton-dashboard-release.yaml
  kubectl --namespace tekton-pipelines port-forward svc/tekton-dashboard 9097:9097

Then you can access tekton dashboard UI at http://localhost:9097

Now set up a service account, which provides identity and secrets for processes that run in pods:

  kubectl get secret regsecret -n default --output=yaml > tekton/regsecret.yaml
  # remove metadata.creationTimestamp, metadata.namespace, metadata.resourceVersion, metadata.uid
  # add metadata.namespace: tekton-pipelines (yaml convention)
  
  cat common | base64
  # paste it in tekton/tekton-git-ssh-secret.yaml's data.ssh-privatekey
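One pitfall here: the value pasted into data.ssh-privatekey must be a single base64 line. macOS base64 does not wrap output by default, but GNU base64 on Linux wraps at 76 characters (use -w 0 there). A small self-contained sketch of the round trip, with a dummy file standing in for your common key:

```shell
# Sketch: verify key material survives a base64 round trip before pasting it
# into tekton-git-ssh-secret.yaml. /tmp/demo-key stands in for your common file.
printf 'demo-key-material\n' > /tmp/demo-key
ENCODED=$(base64 < /tmp/demo-key | tr -d '\n')   # force a single line, as Kubernetes expects
printf '%s' "$ENCODED" | base64 --decode         # should print the original contents
```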

After modification, apply the following files:

  kubectl apply -f tekton/regsecret.yaml
  kubectl apply -f tekton/tekton-git-ssh-secret.yaml
  kubectl apply -f tekton/serviceaccount.yaml

Now we can set up the pipelines and tasks. This time, however, we will create a PersistentVolumeClaim for our pipelines so that every rerun of the pipeline does not need to download the Maven dependencies again.
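As a sketch of what tekton/maven-repo-pvc.yaml presumably contains (the name matches the file, but the storage size and access mode are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: maven-repo-pvc
  namespace: tekton-pipelines
spec:
  accessModes:
    - ReadWriteOnce        # a single node mounts the Maven cache at a time
  resources:
    requests:
      storage: 5Gi         # assumed size; enough for a typical local Maven repo
```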

  kubectl apply -f tekton/maven-repo-pvc.yaml
  kubectl apply -f tekton/pipeline.yaml
  kubectl apply -f tekton/task-build-push.yaml

Now push local java_project to github source repository.

To execute the pipeline:

  # Make sure that maven github repository has README.md created in the first place.
  # Before typing command, change buildRevision, appGitUrl, configGitUrl, mavenRepoUrl, appImage
  kubectl create -f tekton/pipelineruns/pipelinerun.yaml
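The parameter names below come from the comment above; everything else (API version, pipeline name, values) is an assumption about how pipelinerun.yaml might be laid out. Note that generateName is why the text uses kubectl create rather than kubectl apply, so each run gets a unique name:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: java-project-run-     # kubectl create appends a random suffix
spec:
  pipelineRef:
    name: java-project-pipeline       # assumed name of the pipeline in pipeline.yaml
  params:
    - name: buildRevision
      value: main
    - name: appGitUrl
      value: git@github.com:<your-username>/source.git
    - name: configGitUrl
      value: git@github.com:<your-username>/GitOps.git
    - name: mavenRepoUrl
      value: https://github.com/<your-username>/maven-repo
    - name: appImage
      value: <docker_username/repository>:account-service
```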

Note that in this tutorial I have not completed every task and pipeline. The build-image step will fail, because I have not yet created an image or environment with a Docker engine inside; we will defer that to the next document. This Tekton build also differs from the one in document 1 in that we set up a PersistentVolumeClaim, which stores the Maven dependencies in local storage so that subsequent PipelineRuns do not need to download them again.

The following is the end result of our tekton pipeline so far:

6. Automation

First we need to install the Tekton Triggers Kubernetes resources:

  kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml

Then we will create a trigger secret for GitHub and a service account that uses the secret:

  kubectl apply -f tekton/pipelinetriggers/github-trigger-secret.yaml
  kubectl apply -f tekton/pipelinetriggers/serviceaccount.yaml

Next we need a TriggerTemplate and a TriggerBinding. A TriggerTemplate is a resource that specifies a blueprint for the resource, such as a TaskRun or PipelineRun, that you want to instantiate and/or execute when your EventListener detects an event. A TriggerBinding is responsible for binding the event payload to the template parameters.
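As an illustrative sketch (the binding and parameter names are assumptions), a TriggerBinding for GitHub push events extracts fields from the webhook payload with $(body.*) expressions like this:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    - name: gitrevision
      value: $(body.head_commit.id)    # commit SHA from the push payload
    - name: gitrepositoryurl
      value: $(body.repository.url)    # repository URL from the push payload
```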

  # Change appGitUrl, configGitUrl, mavenRepoUrl, appImage
  kubectl apply -f tekton/pipelinetriggers/triggerTemplate.yaml

Then we need an actual EventListener, which is the primary interface for external sources to send events, and which triggers the creation of the Tekton resources defined as part of the TriggerTemplate.

  kubectl apply -f tekton/pipelinetriggers/eventlistener.yaml
  kubectl get pods,svc -leventlistener=github-listener-interceptor

Then you will see that the EventListener has created a Kubernetes service:

If you need the EventListener service to be available outside of the cluster, so that it can serve as a webhook target, the service el-github-listener-interceptor needs to be exposed via an Ingress. However, we will just expose it through a NodePort, since we do not have a static IP to connect to a public GitHub webhook. We will use a local Git server in the production environment, and we defer this to the next document.

  kubectl apply -f tekton/pipelinetriggers/event-nodeport.yaml
  minikube service github-eventlistener-service -n tekton-pipelines --url
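Without a public webhook, you can still exercise the listener by hand. The sketch below signs a minimal, made-up push payload with the trigger secret (a placeholder value here; use the one from github-trigger-secret.yaml) the way GitHub does for its X-Hub-Signature header; set URL to the NodePort address printed above before running the curl:

```shell
# Placeholder secret: replace with the value from github-trigger-secret.yaml.
SECRET='1234567'
PAYLOAD='{"head_commit":{"id":"abc123"},"repository":{"url":"https://github.com/you/source"}}'
# GitHub signs the raw request body with HMAC-SHA1 and sends it as X-Hub-Signature.
SIG="sha1=$(printf '%s' "$PAYLOAD" | openssl dgst -sha1 -hmac "$SECRET" | sed 's/^.* //')"
# Only attempt the POST when URL is set to the NodePort address.
if [ -n "${URL:-}" ]; then
  curl -s -X POST "$URL" \
    -H 'Content-Type: application/json' \
    -H 'X-GitHub-Event: push' \
    -H "X-Hub-Signature: $SIG" \
    -d "$PAYLOAD"
fi
```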

When you access it through the web, you can see an event trigger.

This is the end of the second document. In this document we separated the concerns into three parts and dissected the problem into manageable components, albeit incompletely due to network and time constraints. We will complete the full CI/CD pipeline in the final document.